Multi-round conversational reinforcement learning recommendation algorithm via multi-granularity feedback
YAO Huayong, YE Dongyi, CHEN Zhaojiong
Journal of Computer Applications    2023, 43 (1): 15-21.   DOI: 10.11772/j.issn.1001-9081.2021111875
A multi-round Conversational Recommendation System (CRS) obtains real-time user information interactively and therefore outperforms traditional recommendation methods such as collaborative filtering. However, existing CRSs suffer from inaccurate mining of user preferences, too many conversation rounds and poorly chosen recommendation moments. To address these problems, a new conversational recommendation algorithm based on deep reinforcement learning and the user's multi-granularity feedback was proposed. Unlike existing CRSs, in each conversation round the proposed algorithm considered the user's feedback on both the items themselves and the finer-grained item attributes. The user, item and item-attribute features were then updated online with the collected multi-granularity feedback, and the environment state after each round was analyzed by the Deep Q-Network (DQN) algorithm. As a result, more appropriate and reasonable decisions were made by the system, the reasons why users choose items were analyzed, and the users' real-time preferences were mined comprehensively within fewer conversation rounds. Experimental results on two real datasets show that, compared with the Simple Conversational Path Reasoning (SCPR) algorithm, the proposed algorithm increases the success rate at turn 15 by 46.5% and reduces the average number of turns by 0.314 on the Last.fm dataset, while on the Yelp dataset it maintains the same success rate and reduces the average number of turns by 0.51.
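A minimal sketch of the kind of DQN-based decision step the abstract describes: after each conversation round, a state vector built from the online-updated user, candidate-item and item-attribute features is scored by a small Q-network that chooses between asking about an attribute and recommending items. The network architecture, feature dimensions and two-action space are illustrative assumptions, not the authors' exact formulation.

```python
import torch
import torch.nn as nn

class ConversationQNetwork(nn.Module):
    """Hypothetical Q-network scoring the two conversational actions."""
    def __init__(self, state_dim: int = 96, n_actions: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_actions),  # Q-values: [ask about an attribute, recommend items]
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def build_state(user_emb, item_embs, attr_embs):
    """Combine online-updated user/item/attribute features into one state vector
    (illustrative: mean-pool candidate items and unasked attributes, then concatenate)."""
    return torch.cat([user_emb, item_embs.mean(dim=0), attr_embs.mean(dim=0)])

# Usage: pick the action with the highest Q-value for the current round.
q_net = ConversationQNetwork(state_dim=3 * 32)
user_emb = torch.randn(32)
item_embs = torch.randn(10, 32)   # current candidate items
attr_embs = torch.randn(5, 32)    # attributes not yet asked about
state = build_state(user_emb, item_embs, attr_embs)
action = q_net(state.unsqueeze(0)).argmax(dim=1).item()  # 0 = ask, 1 = recommend
```

In the paper's setting, the same state would also be enriched with the item-level and attribute-level feedback collected in earlier rounds, which is what lets the policy decide the recommendation moment with fewer turns.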
Under-sampling method based on sample density peaks for imbalanced data
SU Junning, YE Dongyi
Journal of Computer Applications    2020, 40 (1): 83-89.   DOI: 10.11772/j.issn.1001-9081.2019060962
Imbalanced data classification is an important problem in data mining and machine learning, and the way the data are re-sampled is crucial to classification accuracy. Concerning the problem that existing under-sampling methods for imbalanced data cannot keep the distribution of the sampled data consistent with that of the original data, an under-sampling method based on sample density peaks was proposed. Firstly, the density peak clustering algorithm was applied to cluster the majority-class samples and to estimate the central and boundary regions of the obtained clusters, so that the weight of each sample was determined by its local density and by the density-peak distribution of the cluster region it belongs to. Then, the majority-class samples were under-sampled according to these weights, so that the number of extracted majority samples decreased gradually from the central region to the boundary region of each cluster. In this way, the extracted samples reflected the original distribution well while suppressing noise. Finally, a balanced dataset was constructed from the sampled majority samples and all minority samples to train the classifier. Experimental results on multiple datasets show that, compared with existing methods such as RBBag (Roughly Balanced Bagging), uNBBag (under-sampling NeighBorhood Bagging) and KAcBag (K-means AdaCost bagging), the proposed method improves F1-measure and G-mean, proving that it is an effective and feasible sampling method.
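A minimal sketch of density-weighted under-sampling in the spirit of the method above: each majority-class sample receives a weight derived from its local density (denser, more central samples are kept with higher probability), the majority class is down-sampled to the minority-class size, and the two classes are merged into a balanced training set. The Gaussian-kernel density and the cutoff heuristic are illustrative stand-ins for the paper's density-peak clustering step.

```python
import numpy as np

def local_density(X_maj, cutoff=None):
    """Gaussian-kernel local density for each majority-class sample."""
    diff = X_maj[:, None, :] - X_maj[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=-1))
    if cutoff is None:
        cutoff = np.percentile(dist, 2) + 1e-12  # common density-peak cutoff heuristic
    return np.exp(-(dist / cutoff) ** 2).sum(axis=1)

def density_undersample(X_maj, y_maj, X_min, y_min, rng=None):
    """Down-sample the majority class to the minority size, favoring dense (central) samples."""
    if rng is None:
        rng = np.random.default_rng(0)
    rho = local_density(X_maj)
    weights = rho / rho.sum()                    # higher density -> more likely to be kept
    idx = rng.choice(len(X_maj), size=len(X_min), replace=False, p=weights)
    X_bal = np.vstack([X_maj[idx], X_min])
    y_bal = np.concatenate([y_maj[idx], y_min])
    return X_bal, y_bal

# Usage with synthetic data: 200 majority vs. 40 minority samples.
rng = np.random.default_rng(42)
X_maj, y_maj = rng.normal(0, 1, (200, 2)), np.zeros(200)
X_min, y_min = rng.normal(2, 1, (40, 2)), np.ones(40)
X_bal, y_bal = density_undersample(X_maj, y_maj, X_min, y_min, rng)
print(X_bal.shape)  # (80, 2): a balanced set for classifier training
```

Weighting the draw by local density is what keeps the extracted majority samples concentrated in cluster centers and thins them out toward boundaries, which suppresses boundary noise while preserving the original distribution.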
Dissimilar attribute reductions based on hierarchical clustering method
TANG Zhouwen, YE Dongyi
Journal of Computer Applications   
Attribute reduction is an important concept in rough set data analysis. An algorithm for finding dissimilar reductions was presented. A bottom-up agglomerative hierarchical clustering method was used to partition the conditional attribute set into k groups, and these k attribute sets were then post-processed to obtain k dissimilar attribute reductions. Experimental results show that the method is effective.
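A minimal sketch of the overall procedure, under stated assumptions: the conditional attributes are grouped into k clusters by bottom-up agglomerative hierarchical clustering on a pairwise attribute-similarity measure, and each cluster then seeds one candidate attribute set, so the resulting sets overlap as little as possible. The correlation-based similarity and the (omitted) rough-set post-processing into reducts are illustrative placeholders, not the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def attribute_clusters(X, k):
    """Cluster the columns (conditional attributes) of decision table X into k groups."""
    corr = np.abs(np.corrcoef(X, rowvar=False))    # attribute-attribute similarity
    dist = 1.0 - corr                              # dissimilar attributes are far apart
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")  # bottom-up merging
    labels = fcluster(Z, t=k, criterion="maxclust")
    return [list(np.where(labels == c)[0]) for c in range(1, k + 1)]

# Usage: partition 8 attributes of a toy decision table into 3 dissimilar groups,
# which a rough-set post-processing step would then trim or extend into reducts.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(100, 8)).astype(float)
for group in attribute_clusters(X, k=3):
    print(group)
```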